Reduction of variance for Gaussian densities via restriction to convex sets
Authors
Abstract
Similar Resources
Variance Reduction for Faster Non-Convex Optimization
We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, over the long history of this basic problem the only known theoretical results on first-order non-convex optimization remain full gradient descent, which converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent, which converges ...
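As a purely illustrative instance of variance reduction in stochastic optimization (not necessarily the algorithm analyzed in this paper), the Python sketch below implements an SVRG-style gradient estimator on a least-squares problem: a full gradient computed at a snapshot point corrects each stochastic gradient, shrinking its variance as the iterate approaches the snapshot. The objective, data, and step size are assumptions chosen for the sketch.

```python
import numpy as np

# Hedged sketch of an SVRG-style variance-reduced update on a toy
# least-squares problem: min_x (1/n) * sum_i (a_i . x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_i(x, i):
    """Gradient of the i-th component loss (a_i . x - b_i)^2."""
    return 2.0 * (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Gradient of the averaged objective."""
    return 2.0 * A.T @ (A @ x - b) / n

x = np.zeros(d)
eta = 0.01                        # step size (assumed, not tuned)
for epoch in range(20):
    x_snap = x.copy()             # snapshot point
    g_snap = full_grad(x_snap)    # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Unbiased variance-reduced estimator: its variance vanishes
        # as x approaches the snapshot point.
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap
        x -= eta * g

print("final gradient norm:", np.linalg.norm(full_grad(x)))
```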
Gaussian Correlation Conjecture for Symmetric Convex Sets
The Gaussian correlation conjecture states that the Gaussian measure of the intersection of two symmetric convex sets is greater than or equal to the product of their measures. In this paper, we first prove that the inequality holds when one of the two convex sets is the intersection of finitely many centered ellipsoids and the other is simply symmetric. We then prove that any symmetric convex set can be a...
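For reference, the inequality in question can be written as follows; the notation γ_n for the standard Gaussian measure is our choice, not taken from the paper.

```latex
% Gaussian correlation inequality: for symmetric convex sets
% K, L \subseteq \mathbb{R}^n and the standard Gaussian measure \gamma_n,
\gamma_n(K \cap L) \;\ge\; \gamma_n(K)\,\gamma_n(L).
```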
Variance Reduction Methods for Simulation of Densities on Wiener Space
We develop a general error analysis framework for the Monte Carlo simulation of densities of functionals on Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles, which are applied to diffusion processes. A comparison with kernel density estimates is made.
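To make that comparison concrete, here is a minimal Python sketch contrasting a kernel density estimate with an integration-by-parts (Malliavin-type) estimator for the simplest functional F = W_T, Brownian motion at time T. The identity p(x) = E[1{W_T > x} · W_T / T] is the classical integration-by-parts formula for this particular F; the kernel bandwidth below is an assumed value.

```python
import numpy as np

# Hedged sketch: two Monte Carlo estimators of the density of F = W_T.
rng = np.random.default_rng(1)
T, x = 1.0, 0.5
N = 100_000
W_T = rng.normal(scale=np.sqrt(T), size=N)

# 1) Kernel density estimate (Gaussian kernel, assumed bandwidth h):
#    biased, and sensitive to the choice of h.
h = 0.1
kde = np.mean(np.exp(-0.5 * ((x - W_T) / h) ** 2) / (h * np.sqrt(2 * np.pi)))

# 2) Integration-by-parts estimator p(x) = E[ 1{W_T > x} * W_T / T ]:
#    unbiased, with no bandwidth to tune.
ibp = np.mean((W_T > x) * W_T / T)

exact = np.exp(-x * x / (2 * T)) / np.sqrt(2 * np.pi * T)
print(f"exact {exact:.5f}  kde {kde:.5f}  ibp {ibp:.5f}")
```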
Variance Reduction via Lattice Rules
This is a review article on lattice methods for multiple integration over the unit hypercube, with a variance-reduction viewpoint. It also contains some new results and ideas. The aim is to examine the basic principles supporting these methods and how they can be used effectively for the simulation models that are typically encountered in the area of Management Science. These models can usually ...
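A minimal sketch of the kind of method reviewed: a randomly shifted rank-1 lattice rule versus plain Monte Carlo, in Python. The generating vector z and the smooth test integrand are assumptions for illustration, not taken from the article; the random shift makes the lattice estimator unbiased, which is the variance-reduction viewpoint.

```python
import numpy as np

# Hedged sketch of a randomly shifted rank-1 lattice rule on [0,1)^d.
rng = np.random.default_rng(2)
n, d = 1021, 4                      # n points (prime), dimension d
z = np.array([1, 306, 388, 512])    # assumed generating vector

def f(u):
    # Separable test function with exact integral 1 over the unit cube.
    return np.prod(1.0 + 0.5 * np.sin(2.0 * np.pi * u), axis=-1)

shift = rng.random(d)                    # random shift -> unbiased estimator
i = np.arange(n)[:, None]
pts = ((i * z % n) / n + shift) % 1.0    # shifted lattice points {i*z/n + D}
lattice_est = f(pts).mean()

mc_est = f(rng.random((n, d))).mean()    # plain Monte Carlo, same budget
print(f"lattice {lattice_est:.6f}  monte carlo {mc_est:.6f}  exact 1.000000")
```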
Using Gaussian Processes for Variance Reduction in Policy Gradient Algorithms*
Gradient-based policy optimization algorithms suffer from high gradient variance, which is usually the result of using Monte Carlo estimates of the Q-value function in the gradient calculation. By replacing this estimate with a function approximator over the state-action space, the gradient variance can be reduced significantly. In this paper we present a method for the training of a Gaussian Process t...
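A hedged sketch of the idea, not the authors' exact method: in a REINFORCE-style gradient g = E[∇ log π(a|s) · Q(s,a)], replace noisy Monte Carlo returns with the posterior mean of a Gaussian process fit to observed (action, return) pairs. The one-dimensional bandit task, RBF kernel, and all hyperparameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def gp_posterior_mean(X, y, Xq, ell=0.5, noise=0.1):
    """GP regression posterior mean with an RBF kernel (standard formula)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    return k(Xq, X) @ np.linalg.solve(K, y)

# Gaussian policy over a 1-D continuous action; the reward is noisy and
# maximized at a = 1.0 (an assumed toy bandit).
theta = 0.0
reward = lambda a: -(a - 1.0) ** 2 + rng.normal(0.0, 1.0, size=a.shape)

for step in range(200):
    a = theta + rng.normal(0.0, 0.5, size=32)   # sample actions from pi
    r = reward(a)                               # noisy Monte Carlo returns
    q_hat = gp_posterior_mean(a, r, a)          # smoothed Q estimates
    score = (a - theta) / 0.5**2                # grad log pi(a; theta)
    theta += 0.05 * np.mean(score * q_hat)      # lower-variance update

print("learned policy mean (optimum is 1.0):", round(theta, 3))
```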
Journal
Journal title: Journal of Multivariate Analysis
Year: 1977
ISSN: 0047-259X
DOI: 10.1016/0047-259x(77)90032-x